Interactive Machine Learning (IML) shall enable intelligent systems to interactively learn from their end-users, and it is quickly becoming more important. Although it puts the human in the loop, interactions are mostly performed via mutual explanations that miss contextual information. Furthermore, current IML strategies such as CAIPI are restricted to "destructive" feedback, meaning that they solely allow an expert to prevent a learner from using irrelevant features. In this work, we propose a novel interaction framework for the text domain called Semantic Interactive Learning. We frame the problem of incorporating constructive and contextual feedback into the learner as the task of finding an architecture that (a) enables more semantic alignment between humans and machines and (b) at the same time helps to maintain the statistical characteristics of the input domain when generating user-defined counterexamples based on meaningful corrections. To this end, we introduce a technique called SemanticPush that effectively translates conceptual corrections of humans into non-extrapolating training examples, such that the learner's reasoning is pushed towards the desired behavior. In several experiments, we show that our method clearly outperforms CAIPI, a state-of-the-art IML strategy, in terms of predictive performance as well as local explanation quality in downstream multi-class classification tasks.
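As a rough illustration of the difference between destructive and constructive feedback, the following minimal Python sketch (hypothetical data and classifier; not the authors' SemanticPush implementation) turns an expert's conceptual correction into an additional, in-domain training example instead of merely blocking a feature.

# Minimal sketch of "constructive" feedback in interactive ML (illustrative only;
# not the SemanticPush algorithm). An expert's conceptual correction becomes an
# extra, in-domain training example rather than a feature mask.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the referee showed a red card", "parliament passed the new budget",
         "the striker scored twice", "the senate debated the tax bill"]
labels = ["sports", "politics", "sports", "politics"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

query = "the budget referee card"                 # ambiguous / misclassified input
print("before feedback:", clf.predict([query])[0])

# Expert correction: "card" in this context refers to sports officiating.
# Constructive feedback: add a meaningful counterexample expressing that concept
# and retrain, instead of only forbidding an irrelevant feature.
texts.append("the referee gave the player a yellow card")
labels.append("sports")
clf.fit(texts, labels)
print("after feedback:", clf.predict([query])[0])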
Representation learning has proven to be a powerful methodology in a wide variety of machine learning applications. For atmospheric dynamics, however, it has so far not been considered, arguably due to the lack of large, labeled datasets that could be used for training. In this work, we show that the difficulty is benign and introduce a self-supervised learning task that defines a loss for a wide variety of unlabeled atmospheric datasets. Specifically, we train a neural network on the simple yet intricate task of predicting the temporal distance between atmospheric fields from distinct but nearby times. We demonstrate that training on this task with ERA5 reanalysis data leads to internal representations that capture intrinsic aspects of atmospheric dynamics. We do so by introducing a data-driven distance metric for atmospheric states. When used as a loss function in other machine learning applications, this AtmoDist distance improves results compared to the classical $\ell_2$-loss. For example, for downscaling one obtains higher-resolution fields that match the true statistics more closely than with previous approaches, and for the interpolation of missing or occluded data the AtmoDist distance leads to results that contain more realistic fine-scale features. Since it is derived from observational data, AtmoDist also provides a novel perspective on atmospheric predictability.
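A minimal PyTorch sketch of the self-supervised pretext task described above, assuming a single gridded field sequence; the toy data, network, and hyperparameters are illustrative and not the authors' AtmoDist implementation.

# Pretext task sketch: predict the temporal distance between two nearby fields.
import torch
import torch.nn as nn

T, H, W = 1000, 32, 32              # toy "reanalysis" sequence: T hourly 32x32 fields
fields = torch.randn(T, 1, H, W)    # stand-in for e.g. a wind-field component
max_lag = 16                        # temporal distances 1..max_lag are the classes

encoder = nn.Sequential(            # shared encoder applied to each field
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten())
head = nn.Linear(2 * 16 * 4 * 4, max_lag)   # classify the lag from both embeddings
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    t = torch.randint(0, T - max_lag, (64,))      # anchor times
    lag = torch.randint(1, max_lag + 1, (64,))    # sampled temporal distances
    a, b = fields[t], fields[t + lag]
    logits = head(torch.cat([encoder(a), encoder(b)], dim=1))
    loss = loss_fn(logits, lag - 1)               # labels 0..max_lag-1
    opt.zero_grad(); loss.backward(); opt.step()

# Trained on real reanalysis data, the encoder's activations can then serve as a
# data-driven distance metric between atmospheric states.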
We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. Despite using 25$\times$ fewer parameters, our Retrieval-Enhanced Transformer (RETRO) obtains performance comparable to GPT-3 and Jurassic-1. After fine-tuning, RETRO's performance translates to downstream knowledge-intensive tasks such as question answering. RETRO combines a frozen BERT retriever, a differentiable encoder, and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than is typically consumed during training. We typically train RETRO from scratch, yet can also rapidly retrofit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.
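The following NumPy sketch illustrates only the chunk-level nearest-neighbour retrieval step with a stand-in frozen encoder; it is not DeepMind's RETRO code. In the real model, the retrieved chunks condition the decoder through chunked cross-attention.

# Chunk-level retrieval sketch (illustrative stand-in for a frozen BERT retriever).
import numpy as np

rng = np.random.default_rng(0)
vocab, chunk_len, dim = 1000, 64, 128
proj = np.random.default_rng(42).standard_normal((vocab, dim))   # frozen "encoder" weights

def frozen_encoder(chunk_tokens):
    # Stand-in for a frozen encoder: project a bag-of-tokens vector to a unit vector.
    bow = np.bincount(chunk_tokens, minlength=vocab)
    v = bow @ proj
    return v / (np.linalg.norm(v) + 1e-8)

# Retrieval database: embeddings of corpus chunks (toy random token chunks here).
corpus_chunks = [rng.integers(0, vocab, chunk_len) for _ in range(2000)]
db = np.stack([frozen_encoder(c) for c in corpus_chunks])

# Split the input sequence into chunks and retrieve k neighbours per chunk.
input_tokens = rng.integers(0, vocab, 256)
k = 2
for i in range(0, len(input_tokens), chunk_len):
    q = frozen_encoder(input_tokens[i:i + chunk_len])
    neighbours = np.argsort(db @ q)[-k:][::-1]    # most similar corpus chunks
    print(f"chunk {i // chunk_len}: retrieved corpus chunks {neighbours.tolist()}")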
Prediction of future climate scenarios under anthropogenic forcing is critical to understand climate change and to assess the impact of potentially counteracting technologies. Machine learning and hybrid techniques for this prediction rely on informative metrics that are sensitive to pertinent but often subtle influences. For atmospheric dynamics, a key part of the climate system, no established metric is available, and visual inspection is currently still used in practice. However, this "eyeball metric" cannot be used for machine learning, where an algorithmic description is required. Motivated by the success of intermediate neural network activations as a basis for learned metrics, e.g. in computer vision, we present a novel, self-supervised representation learning approach specifically designed for atmospheric dynamics. Our approach, called AtmoDist, trains a neural network on a simple auxiliary task: predicting the temporal distance between elements of a randomly shuffled sequence of atmospheric fields (e.g. the components of the wind field from a reanalysis or simulation). This task forces the network to learn important intrinsic aspects of the data as activations in its layers, and a discriminative metric can thus be obtained. We demonstrate this by using AtmoDist to define a metric for GAN-based super-resolution of vorticity and divergence. Our upscaled data matches the high-resolution reference visually as well as in terms of its statistics, and it significantly outperforms the state of the art based on mean squared error. Since AtmoDist is unsupervised, requires only a temporal sequence of fields, and uses a simple auxiliary task, it has the potential to be useful in a wide range of applications.
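To illustrate how a representation learned this way can act as a metric, the following PyTorch sketch uses the activations of a (here untrained) encoder as a content loss for a toy super-resolution generator; it is illustrative only and not the authors' GAN setup.

# Learned-metric loss sketch: compare encoder activations instead of raw pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                     # would be the pretext-trained encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
for p in encoder.parameters():               # the metric network stays frozen
    p.requires_grad_(False)

generator = nn.Sequential(                   # toy 4x super-resolution generator
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

hi_res = torch.randn(8, 1, 64, 64)           # stand-in for vorticity/divergence fields
lo_res = F.avg_pool2d(hi_res, 4)             # simulated low-resolution input

for step in range(100):
    pred = generator(lo_res)
    # Distance between frozen encoder activations replaces the plain pixel-wise MSE.
    loss = F.mse_loss(encoder(pred), encoder(hi_res))
    opt.zero_grad(); loss.backward(); opt.step()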
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
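A minimal sketch of the underlying idea, embedding documents with a pre-trained language model and ranking them by vector similarity; this is not the Logic Mill API, and the model name below is merely a common, publicly available example.

# Document-similarity sketch via pre-trained embeddings (illustrative only).
from sentence_transformers import SentenceTransformer
import numpy as np

docs = [
    "A method for storing electrical energy in lithium-ion cells.",
    "We study community detection in large social networks.",
    "A novel battery chemistry improves the energy density of Li-ion accumulators.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)   # unit-length document vectors

query = emb[0]
scores = emb @ query                                  # cosine similarity to the first doc
for rank in np.argsort(-scores):
    print(f"{scores[rank]:.3f}  {docs[rank]}")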
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands for all established quantum community detection approaches, we introduce a novel QUBO based approach that only needs number-of-nodes many qubits and is represented by a QUBO-matrix as sparse as the input graph's adjacency matrix. The substantial improvement on the sparsity of the QUBO-matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set, which -- upon its removal from the graph -- yields a set of connected components, representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node sets to the identified community cores, subsequent experimental results yield a proof of concept. This work hence displays a promising approach to NISQ ready quantum community detection, catalyzing the application of quantum computers for the network structure analysis of large scale, real world problem instances.
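The classical part of the pipeline can be sketched as follows (illustrative only): the separation-node set, which in the paper is obtained by solving the QUBO on quantum hardware, is simply assumed here; removing it yields connected components that serve as community cores, and the separation nodes are then assigned greedily.

# Separation-node post-processing sketch with networkx (separation set is a placeholder).
import networkx as nx

G = nx.karate_club_graph()
separation_nodes = {0, 2, 32, 33}        # placeholder set, not an actual QUBO solution

core_graph = G.subgraph(n for n in G if n not in separation_nodes)
cores = [set(c) for c in nx.connected_components(core_graph)]
community = {n: i for i, core in enumerate(cores) for n in core}

# Greedy heuristic: attach each separation node to the core where it has most neighbours.
for n in sorted(separation_nodes):
    counts = [sum(1 for nb in G[n] if community.get(nb) == i) for i in range(len(cores))]
    community[n] = max(range(len(cores)), key=counts.__getitem__)

print(community)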
The following article presents a memetic algorithm that applies deep reinforcement learning (DRL) to solve practically oriented dual resource constrained flexible job shop scheduling problems (DRC-FJSSP). In recent years, there has been extensive research on DRL techniques, but without considering realistic, flexible and human-centered shopfloors. A research gap can be identified in the context of make-to-order oriented discontinuous manufacturing, as it is often found in medium-sized companies with high service levels. From practical industry projects in this domain, we recognize requirements to depict flexible machines, human workers and capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill of material (BOM) manufacturing, sequence-dependent setup times and (partially) automated tasks. On the other hand, intensive research has been done on metaheuristics in the context of DRC-FJSSP. However, there is a lack of suitable and generic scheduling methods that can be holistically applied in sociotechnical production and assembly processes. In this paper, we first formulate an extended DRC-FJSSP induced by the practical requirements mentioned. Then we present our proposed hybrid framework with parallel computing for multicriteria optimization. Through numerical experiments with real-world data, we confirm that the framework generates feasible schedules efficiently and reliably. Utilizing DRL instead of random operations leads to better results and outperforms traditional approaches.
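As a rough sketch of the memetic principle (evolutionary search combined with a local-search step), the toy example below schedules jobs on identical machines; the actual DRC-FJSSP model and the DRL-guided operator choices of the paper are far richer and only hinted at in the comments.

# Memetic algorithm skeleton on a toy scheduling problem (illustrative only).
import random

random.seed(0)
proc_times = [random.randint(1, 20) for _ in range(30)]   # toy jobs
n_machines = 4

def makespan(assign):
    loads = [0] * n_machines
    for job, m in enumerate(assign):
        loads[m] += proc_times[job]
    return max(loads)

def local_search(assign):
    # Memetic step: greedily reassign each job to the machine that minimizes the makespan.
    assign = assign[:]
    for job in range(len(assign)):
        assign[job] = min(range(n_machines),
                          key=lambda k: makespan(assign[:job] + [k] + assign[job + 1:]))
    return assign

pop = [[random.randrange(n_machines) for _ in proc_times] for _ in range(20)]
for gen in range(50):
    pop.sort(key=makespan)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(proc_times))
        child = a[:cut] + b[cut:]                      # one-point crossover
        if random.random() < 0.3:                      # mutation (DRL could guide this choice)
            child[random.randrange(len(child))] = random.randrange(n_machines)
        children.append(local_search(child))
    pop = parents + children

print("best makespan:", makespan(min(pop, key=makespan)))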
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. The annotation quality might be affected by various aspects such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations, which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for the recruitment of qualified annotators in other challenging annotation tasks.
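A minimal, hypothetical sketch of one qualification-style screening step, not the paper's exact three-step pipeline: candidate workers are only admitted to the main evaluation if they agree sufficiently with gold-standard items.

# Qualification screening sketch (all data and the threshold are hypothetical).
gold = {"item1": "A", "item2": "C", "item3": "B"}

candidate_answers = {
    "worker_1": {"item1": "A", "item2": "C", "item3": "B"},
    "worker_2": {"item1": "B", "item2": "C", "item3": "A"},
    "worker_3": {"item1": "A", "item2": "C", "item3": "A"},
}
threshold = 0.66   # minimum fraction of gold items answered correctly

qualified = []
for worker, answers in candidate_answers.items():
    agreement = sum(answers[i] == g for i, g in gold.items()) / len(gold)
    if agreement >= threshold:
        qualified.append(worker)

print("qualified workers:", qualified)   # these would receive the MTurk qualification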
We present NusaCrowd, a collaborative initiative to collect and unify existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.